
    Multimodal Word Sense Translation


    Word vector-space embeddings of natural language data over time

    Words are often mapped to vectors in a vector space (Euclidean space). Such mappings, also called embeddings, are used in many Natural Language Processing (NLP) tasks. Word embeddings are generally intended to reflect the usage, semantic similarities, and relatedness of the words they represent; simply put, they reflect the meaning of a word relative to other words. However, word meanings are known to change over time (semantic change). Currently available public word embeddings are ‘static’, with no temporal component. Creating ‘dynamic’ word embeddings by adding temporal information opens the possibility of capturing semantic change. Such embeddings can be used to produce visual animations of semantic change and of shifting word relations over time, and they also have the potential to improve performance on various NLP tasks, particularly time-sensitive ones such as Diachronic Text Evaluation. This project achieves the following: (1) create word embeddings with a time component (dynamic embeddings) that capture the meaning, usage, and similarities of words across the years 1800 to 2008; (2) develop a tool that animates changes in word relations using the dynamic embeddings; (3) evaluate the dynamic embeddings using word similarity measures and the Diachronic Text Evaluation task.
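    For context on what dynamic embeddings involve in practice: vectors trained independently on different time slices sit in arbitrary orientations of the vector space, so a common alignment step (not necessarily the one used in this project) is orthogonal Procrustes, which rotates each slice onto a reference before neighbours are compared or animated. A minimal sketch, assuming gensim Word2Vec models trained per decade; the file names and the query word are hypothetical:

        import numpy as np
        from gensim.models import Word2Vec

        def procrustes_align(base, other, shared_words):
            """Rotate `other`'s vectors onto `base`'s space (orthogonal Procrustes)."""
            A = np.stack([base.wv[w] for w in shared_words])
            B = np.stack([other.wv[w] for w in shared_words])
            # SVD of the cross-covariance B^T A gives the optimal rotation R = U V^T
            U, _, Vt = np.linalg.svd(B.T @ A)
            other.wv.vectors = other.wv.vectors @ (U @ Vt)
            return other

        # Hypothetical per-decade models trained on time-sliced corpora.
        base = Word2Vec.load("embeddings_1990.model")
        old = Word2Vec.load("embeddings_1800.model")
        shared = [w for w in old.wv.index_to_key if w in base.wv.key_to_index]
        old = procrustes_align(base, old, shared)

        # After alignment, nearest neighbours are comparable across decades,
        # which is what an animation of semantic change would plot frame by frame.
        print(old.wv.most_similar("broadcast", topn=5))
        print(base.wv.most_similar("broadcast", topn=5))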

    MultiSubs: A Large-scale Multimodal and Multilingual Dataset

    This paper introduces a large-scale multimodal and multilingual dataset that aims to facilitate research on grounding words to images in their contextual usage in language. The dataset consists of images selected to unambiguously illustrate concepts expressed in sentences from movie subtitles. The dataset is a valuable resource because (i) the images are aligned to text fragments rather than whole sentences; (ii) multiple images are possible for a text fragment and a sentence; (iii) the sentences are free-form and real-world like; (iv) the parallel texts are multilingual. We set up a fill-in-the-blank game for humans to evaluate the quality of the automatic image selection process of our dataset. We show the utility of the dataset on two automatic tasks: (i) fill-in-the-blank; (ii) lexical translation. Results of the human evaluation and of automatic models demonstrate that images can be a useful complement to the textual context. The dataset will benefit research on visual grounding of words, especially in the context of free-form sentences, and can be obtained from https://doi.org/10.5281/zenodo.5034604 under a Creative Commons licence.

    Comment: Manuscript update: (i) added links to the dataset and evaluation toolkit; (ii) Section 6.1.4: added random and n-gram baselines to the fill-in-the-blank task, with further discussion at the end of the section; (iii) Section 6.2.3: further elaboration on the ALI metric; (iv) Section 6.2.4: corrected results for the lexical translation task (Table 8) and updated the discussion accordingly.
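    To make the fill-in-the-blank task concrete: a model must recover a masked text fragment from its sentence, optionally with the aligned image as extra context. A minimal text-only baseline can be sketched with an off-the-shelf masked language model; the HuggingFace model choice and the example sentence are our assumptions, not the paper's setup:

        from transformers import pipeline

        # Text-only fill-in-the-blank baseline (assumed setup, not the paper's model):
        # the image-aligned fragment is replaced by the mask token, and the MLM
        # ranks candidate fillers from sentence context alone.
        fill = pipeline("fill-mask", model="bert-base-uncased")

        sentence = "She poured the [MASK] into a tall glass."  # hypothetical MultiSubs-style example
        for cand in fill(sentence, top_k=3):
            print(f"{cand['token_str']:>10}  {cand['score']:.3f}")

    Comparing such a text-only baseline against an image-aware model is one way to quantify the paper's claim that images usefully complement the textual context.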